
Leveraging Professional Ethics for Responsible AI

Communications of the ACM

Artificial Intelligence (AI) is proliferating throughout society, but so too are calls for practicing Responsible AI.4 The ACM Code of Ethics and Professional Conduct states that computing professionals should contribute to society and human well-being (General Ethical Principle 1.1), but it can be difficult for a computer scientist to judge the impacts of a particular application in all fields. AI is influencing a range of social domains, from law and medicine to journalism, government, and education. Technologists do not just need to make the technology work and scale it up; they must make it work while also being responsible for a host of societal, ethical, legal, and other human-centered concerns in these domains.11 There is no shortcut to becoming an expert social scientist, ethicist, or legal scholar.


Certifiable Robustness for Naive Bayes Classifiers

Bian, Song, Ouyang, Xiating, Fan, Zhiwei, Koutris, Paraschos

arXiv.org Artificial Intelligence

Data cleaning is crucial but often laborious in most machine learning (ML) applications. However, task-agnostic data cleaning is sometimes unnecessary if certain inconsistencies in the dirty data will not affect the predictions of ML models on the test points. A test point is certifiably robust for an ML classifier if the prediction remains the same regardless of which (among exponentially many) cleaned datasets it is trained on. In this paper, we study certifiable robustness for the Naive Bayes classifier (NBC) on dirty datasets with missing values. We present (i) an algorithm, linear in the number of entries in the dataset, that decides whether a test point is certifiably robust for NBC; (ii) an algorithm that counts, for each label, the number of cleaned datasets on which the NBC can be trained to predict that label; and (iii) an efficient optimal algorithm that poisons a clean dataset by inserting the minimum number of missing values such that a test point is not certifiably robust for NBC. We prove that (iv) poisoning a clean dataset such that multiple test points become certifiably non-robust is NP-hard for any dataset with at least three features. Our experiments demonstrate that our algorithms for the decision and data poisoning problems achieve up to $19.5\times$ and $3.06\times$ speed-up over the baseline algorithms across different real-world datasets.
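Certifiable robustness as defined above can be checked, on toy inputs, by brute-force enumeration of every completion of the missing cells. This exponential sketch illustrates the definition only, not the paper's linear-time algorithm; the dataset, domains, and function names below are hypothetical.

```python
from itertools import product

def nb_predict(rows, labels, test, alpha=1.0):
    """Toy Naive Bayes prediction over categorical features, with Laplace smoothing."""
    classes = sorted(set(labels))
    best, best_score = None, float("-inf")
    for c in classes:
        idx = [i for i, y in enumerate(labels) if y == c]
        score = len(idx) / len(labels)  # class prior
        for j, v in enumerate(test):
            domain = {r[j] for r in rows}
            count = sum(1 for i in idx if rows[i][j] == v)
            score *= (count + alpha) / (len(idx) + alpha * len(domain))
        if score > best_score:
            best, best_score = c, score
    return best

def certifiably_robust(rows, labels, test, domains):
    """Brute force: the test point is certifiably robust iff every completion
    of the missing (None) cells yields the same NBC prediction."""
    holes = [(i, j) for i, r in enumerate(rows) for j, v in enumerate(r) if v is None]
    preds = set()
    for fill in product(*(domains[j] for _, j in holes)):
        world = [list(r) for r in rows]
        for (i, j), v in zip(holes, fill):
            world[i][j] = v
        preds.add(nb_predict(world, labels, test))
        if len(preds) > 1:
            return False  # two completions disagree: not robust
    return True
```

On a tiny dataset with one missing value, either every completion agrees (robust) or some completion flips the prediction (not robust), which is exactly the property the paper's algorithms decide without enumeration.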


A Tool to Graphically Edit CP-Nets

Shafran, Aidan (Madison Central High School) | Saarinen, Sam (University of Kentucky) | Goldsmith, Judy (University of Kentucky)

AAAI Conferences

Qualitative preferences over outcomes in a combinatorial domain (where many variables jointly describe the outcome) are useful in automated decision making and modeling human preferences in real world domains. Conditional Preference Networks (CP-nets), also known as Ceteris Paribus Networks, are a compact graph-based mathematical formalism for representing such preferences (Boutilier et al. 2004). The CP-net visualizer presented is useful for researchers eliciting human preferences, building CP-nets for specific experiments, visualizing generated CP-nets, and for the general public learning more about preference modeling. It has an interface consisting of three vertical panels. On the left is ...
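To illustrate what a CP-net encodes, here is a minimal sketch: each variable carries a conditional preference table (CPT) keyed by its parents' values, and sweeping the variables in topological order yields the most preferred outcome. The `dinner`/`wine` example and all names are hypothetical, not part of the tool described above.

```python
# Hypothetical toy CP-net: each variable lists its parents and a CPT mapping
# each parent assignment to that variable's values, most preferred first.
cpnet = {
    "dinner": {"parents": [], "cpt": {(): ["fish", "meat"]}},
    "wine":   {"parents": ["dinner"],
               "cpt": {("fish",): ["white", "red"],
                       ("meat",): ["red", "white"]}},
}

def optimal_outcome(cpnet, order):
    """Forward sweep: visit variables in topological order and pick each
    variable's most preferred value given its parents' chosen values."""
    outcome = {}
    for var in order:
        spec = cpnet[var]
        key = tuple(outcome[p] for p in spec["parents"])
        outcome[var] = spec["cpt"][key][0]
    return outcome
```

For an acyclic CP-net this sweep is the standard way to compute the single most preferred outcome.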


Real-Time Optimal Selection of Multirobot Coalition Formation Algorithms Using Conceptual Clustering

Sen, Sayan Dev (Vanderbilt University) | Adams, Julie Ann (Vanderbilt University)

AAAI Conferences

The multirobot coalition formation problem seeks to intelligently partition a team of heterogeneous robots into coalitions for a set of real-world tasks. Besides being NP-complete (Sandholm et al. 1999), the problem is also hard to approximate (Service and Adams 2011a). Traditional approaches to solving the problem include a number of greedy algorithms (Shehory and Kraus 1998; Vig and Adams ...). The presented framework is the first to leverage a conceptual clustering technique to partition any set of coalition formation algorithms in order to derive an optimal hierarchy classification tree, given any classification taxonomy. The results contribute to the state-of-the-art in multiagent systems by demonstrating the existence of crucial patterns and intricate relationships among existing coalition algorithms.
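As a toy illustration of the underlying problem (not of the presented selection framework), a greedy coalition formation scheme in the spirit of the cited approaches can be sketched as follows; the robots, tasks, and value function are hypothetical.

```python
from itertools import combinations

def greedy_coalitions(robots, tasks, value, max_size=2):
    """Greedy coalition formation: repeatedly commit the (coalition, task) pair
    with the highest value until tasks or free robots run out."""
    free, unassigned, plan = set(robots), list(tasks), {}
    while unassigned and free:
        best = None
        for task in unassigned:
            for k in range(1, max_size + 1):
                for coal in combinations(sorted(free), k):
                    v = value(coal, task)
                    if best is None or v > best[0]:
                        best = (v, coal, task)
        _, coal, task = best
        plan[task] = coal
        free -= set(coal)       # committed robots leave the pool
        unassigned.remove(task)
    return plan
```

Enumerating coalitions up to a bounded size is what keeps such greedy schemes polynomial, at the cost of the approximation hardness noted above.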


The Tractability of CSP Classes Defined by Forbidden Patterns

Cohen, D. A., Cooper, M. C., Creed, P., Marx, D., Salamon, A. Z.

Journal of Artificial Intelligence Research

The constraint satisfaction problem (CSP) is a general problem central to computer science and artificial intelligence. Although the CSP is NP-hard in general, considerable effort has been spent on identifying tractable subclasses. The two main approaches consider structural properties (restrictions on the hypergraph of constraint scopes) and relational properties (restrictions on the language of constraint relations). Recently, some authors have considered hybrid properties that restrict the constraint hypergraph and the relations simultaneously. Our key contribution is the novel concept of a CSP pattern and classes of problems defined by forbidden patterns (which can be viewed as forbidding generic sub-problems). We describe the theoretical framework which can be used to reason about classes of problems defined by forbidden patterns. We show that this framework generalises certain known hybrid tractable classes. Although we are not close to obtaining a complete characterisation concerning the tractability of general forbidden patterns, we prove a dichotomy in a special case: classes of problems that arise when we can only forbid binary negative patterns (generic sub-problems in which only disallowed tuples are specified). In this case we show that all (finite sets of) forbidden patterns define either polynomial-time solvable or NP-complete classes of instances.
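For readers unfamiliar with the base problem, a minimal brute-force binary CSP solver can be sketched as follows; it illustrates the general (NP-hard) problem that the tractability results above concern, with hypothetical variable and constraint names.

```python
def solve_csp(domains, constraints):
    """Backtracking search for a binary CSP. `domains` maps variable -> values;
    `constraints` maps an ordered pair (x, y) -> set of allowed value pairs."""
    variables = list(domains)

    def consistent(assignment, var, val):
        for (x, y), allowed in constraints.items():
            if x == var and y in assignment and (val, assignment[y]) not in allowed:
                return False
            if y == var and x in assignment and (assignment[x], val) not in allowed:
                return False
        return True

    def backtrack(assignment):
        if len(assignment) == len(variables):
            return dict(assignment)
        var = variables[len(assignment)]
        for val in domains[var]:
            if consistent(assignment, var, val):
                assignment[var] = val
                result = backtrack(assignment)
                if result:
                    return result
                del assignment[var]
        return None

    return backtrack({})
```

Tractable subclasses, including the forbidden-pattern classes studied in the paper, identify restrictions under which this worst-case exponential search can be replaced by a polynomial-time algorithm.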


Modular Schemes for Constructing Equivalent Boolean Encodings of Cardinality Constraints and Application to Error Diagnosis in Formal Verification of Pipelined Microprocessors

Velev, Miroslav N. (Aries Design Automation) | Gao, Ping (Aries Design Automation)

AAAI Conferences

We present a novel method for generating a wide range of equivalent Boolean encodings of cardinality constraints, whereas all previous Boolean encodings of cardinality have only one form. Experiments applying this method to automated error diagnosis in formal verification of buggy variants of a complex reconfigurable VLIW processor indicate speedups of up to two orders of magnitude relative to previous encodings of cardinality. Besides automated debugging of hardware and software, the presented Boolean encodings of cardinality have applications to many other problems.
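As background on why cardinality constraints admit several distinct but logically equivalent Boolean encodings, here are two standard encodings of the at-most-one constraint (pairwise and sequential/ladder), plus a brute-force check that they accept exactly the same assignments. This is textbook material, not the paper's method, and the helper names are ours.

```python
from itertools import combinations, product

def amo_pairwise(xs):
    """At-most-one over variables xs via pairwise clauses: O(n^2) clauses, no aux vars."""
    return [(-a, -b) for a, b in combinations(xs, 2)]

def amo_sequential(xs, aux_start):
    """At-most-one via the sequential/ladder encoding: O(n) clauses, using
    auxiliary variables s_i meaning 'some x_1..x_i is true'."""
    n = len(xs)
    s = list(range(aux_start, aux_start + n - 1))
    clauses = []
    for i in range(n - 1):
        clauses.append((-xs[i], s[i]))        # x_i -> s_i
    for i in range(1, n - 1):
        clauses.append((-s[i - 1], s[i]))     # s_{i-1} -> s_i
    for i in range(1, n):
        clauses.append((-xs[i], -s[i - 1]))   # x_i -> no earlier true literal
    return clauses

def models_over(clauses, n_vars, xs):
    """Project the satisfying assignments of `clauses` onto the variables xs."""
    seen = set()
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        lit = lambda l: assign[l] if l > 0 else not assign[-l]
        if all(any(lit(l) for l in c) for c in clauses):
            seen.add(tuple(assign[x] for x in xs))
    return seen
```

Both encodings project onto the same set of assignments over x1..x3 (those with at most one true variable), which is the sense of "equivalent" used for cardinality encodings.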


A Generic Global Constraint based on MDDs

Tiedemann, Peter, Andersen, Henrik Reif, Pagh, Rasmus

arXiv.org Artificial Intelligence

Constraint Programming (CP)[21] is a powerful technique for specifying Constraint Satisfaction Problems (CSPs) based on allowing a constraint programmer to model problems in terms of high-level constraints. Using such global constraints allows easier specification of problems but also allows for faster solvers that take advantage of the structure in the problem. The classical approach to CSP solving is to explore the search tree of all possible assignments to the variables in a depth-first backtracking manner, guided by various heuristics, until a solution is found or proven not to exist. One of the most basic techniques for reducing the number of search tree nodes explored is to perform domain propagation at each node. In order to get as much domain propagation as possible we wish for each constraint to remove from the variable domains all values that cannot participate in a solution to that constraint.
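The domain-propagation goal stated above, removing every value with no support in the constraint, is generalized arc consistency. A brute-force sketch (with hypothetical names, not the paper's MDD-based propagator) is:

```python
from itertools import product

def propagate(domains, scope, predicate):
    """Generalized arc consistency for one constraint: keep only values that
    appear in at least one satisfying tuple (support) of the constraint."""
    tuples = [t for t in product(*(domains[v] for v in scope)) if predicate(*t)]
    pruned = {}
    for i, v in enumerate(scope):
        pruned[v] = [val for val in domains[v] if any(t[i] == val for t in tuples)]
    return pruned
```

Enumerating all tuples is exponential in the constraint arity; representations such as MDDs exist precisely to achieve this pruning without explicit enumeration.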